
    Adaptive User Perspective Rendering for Handheld Augmented Reality

    Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user's real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices commonly rely on video see-through, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the device's camera rather than the user's eyes. Recent approaches counteract this distortion by estimating the user's head position and rendering the scene from the user's perspective. To this end, they usually apply face-tracking algorithms to the front camera of the mobile device. However, this demands considerable computational resources and therefore commonly degrades application performance beyond the already high computational load of AR applications. In this paper, we present a method that reduces the computational demands of user perspective rendering by applying lightweight optical flow tracking and estimating the user's motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and compare it to device perspective rendering, to head-tracked user perspective rendering, and to fixed point of view user perspective rendering.
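
    The gating idea in this abstract can be illustrated with a short sketch: run cheap sparse optical flow on the front camera and only invoke the expensive face tracker once the estimated user motion exceeds a threshold. This is a minimal illustration in Python with OpenCV, not the paper's implementation; the face_tracker interface and the motion threshold are assumptions.

        import cv2
        import numpy as np

        MOTION_THRESHOLD = 2.0  # mean feature displacement in pixels (assumed value)

        def user_motion(prev_gray, curr_gray):
            """Estimate user motion via sparse Lucas-Kanade optical flow."""
            corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                              qualityLevel=0.3, minDistance=7)
            if corners is None:
                return 0.0
            new_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                             corners, None)
            ok = status.flatten() == 1
            if not ok.any():
                return 0.0
            # Mean displacement of the successfully tracked features.
            return float(np.mean(np.linalg.norm(new_pts[ok] - corners[ok], axis=2)))

        def head_pose(prev_gray, curr_gray, face_tracker):
            # Start the costly face tracker only when optical flow indicates
            # that the user actually moved; otherwise reuse the last pose.
            if user_motion(prev_gray, curr_gray) > MOTION_THRESHOLD:
                face_tracker.update(curr_gray)  # hypothetical tracker interface
            return face_tracker.last_pose()     # hypothetical accessor

    Because sparse optical flow over a few dozen features is far cheaper per frame than face detection and tracking, the expensive path runs only during actual head motion, which matches the motivation stated above.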

    Temporal Coherence Strategies for Augmented Reality Labeling


    Transitional Augmented Reality navigation for live captured scenes

    Figure 1: After placing physical objects on a table, our system offers the possibility to structurally navigate an unprepared scene with a set of new transitional navigation techniques in AR (left) and VR (right) modes. The techniques seamlessly switch between AR and VR modes.

    Augmented Reality (AR) applications require knowledge about the real-world environment in which they are used. This knowledge is often gathered while developing the AR application and stored for future uses of the application. Consequently, changes to the real world lead to a mismatch between the previously recorded data and the real world. New capturing techniques based on dense Simultaneous Localization and Mapping (SLAM) not only allow users to capture real-world scenes at run-time, but also enable them to capture changes of the world. However, instead of using previously recorded and prepared scenes, users must interact with an unprepared environment. In this paper, we present a set of new interaction techniques that support users in handling captured real-world environments. The techniques present virtual viewpoints of the scene based on a scene analysis and provide natural transitions between the AR view and the virtual viewpoints. We demonstrate our approach with a SLAM-based prototype that allows us to capture a real-world scene, and we describe example applications of our system.
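
    As a concrete illustration of a transition between the AR view and a virtual viewpoint, the following Python sketch interpolates the camera from the device's tracked AR pose to a viewpoint proposed by the scene analysis, using linear interpolation for position and spherical interpolation for orientation. This is one plausible realization, not the system described above; the pose representation is an assumption.

        import numpy as np
        from scipy.spatial.transform import Rotation, Slerp

        def transition_pose(ar_pos, ar_rot, vr_pos, vr_rot, t):
            """Camera pose at transition parameter t in [0, 1].

            ar_pos/vr_pos: 3-vectors; ar_rot/vr_rot: scipy Rotation objects.
            """
            # Linear interpolation for position, slerp for orientation.
            pos = (1.0 - t) * np.asarray(ar_pos) + t * np.asarray(vr_pos)
            rot = Slerp([0.0, 1.0], Rotation.concatenate([ar_rot, vr_rot]))(t)
            return pos, rot

    Driving t from 0 to 1 with an animation timer and rendering the captured scene from the returned pose yields a smooth glide from the egocentric AR view into the structural VR overview; running it in reverse returns the user to AR.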

    Embedded Virtual Views for Augmented Reality Navigation

    Figure 1: An embedded virtual view allows users to follow a guided route without leaving the egocentric viewpoint. (a) The navigation aid turns around the corner and the user's view is blocked by the building. (b) By extending the egocentric viewpoint with an additional embedded virtual view, the user can still perceive the route indicated by the navigation aid. Note that the additional view embeds the navigation aid correctly in the environment.

    In this paper, we present embedded virtual views used for turn-based pedestrian navigation in Augmented Reality (AR). Embedded views allow users to see around occluding structures while seamlessly integrating the augmented navigation aid into the otherwise occluded view. Users get a preview of upcoming route changes without the need to consult an additional map view. We compare embedded views to other methods for revealing occluded navigation aids. We demonstrate that the technique is more screen-space efficient than a typical x-ray vision technique, and that it may better facilitate the mental linking of information when compared to a mirror technique.
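
    A rough sketch of the rendering idea: draw the egocentric view from the user's camera, render the occluded route segment from a second virtual camera placed at the corner, and composite that rendering into the main frame at the occluder's screen position. The renderer interface below (render, render_to_texture, draw_textured_quad, project) is hypothetical, standing in for whatever engine such a system is built on.

        def draw_embedded_view(renderer, scene, user_cam, corner, route):
            # Main pass: the egocentric AR view from the user's camera.
            renderer.render(scene, user_cam)

            # Second pass: a virtual camera at the corner, looking down the
            # route segment that the building occludes from the user.
            virtual_cam = user_cam.copy()
            virtual_cam.position = corner
            virtual_cam.look_at(route.next_waypoint(after=corner))
            inset = renderer.render_to_texture(scene, virtual_cam)

            # Composite the inset at the occluder's screen-space location so
            # the navigation aid appears correctly embedded in the environment.
            center = user_cam.project(corner)  # world -> screen coordinates
            renderer.draw_textured_quad(inset, center=center, size=(0.25, 0.25))

    Unlike x-ray vision, which overlays the whole occluded geometry, the embedded view claims only the screen space of the inset, which is consistent with the screen-space efficiency comparison in the abstract.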